2024-08-06 09:10:07 · AIbase
ControlMM: Multi-modal Input for Full-body Motion Generation from Text, Speech, and Music
ControlMM is a technical framework developed jointly by the Chinese University of Hong Kong and Tencent to address the challenges of multi-modal full-body motion generation. The framework accepts text, speech, and music as inputs and generates full-body motion that matches the given content. Its ControlMM-Attn module processes the dynamic and static human topology in parallel, enabling efficient learning of motion knowledge. Training follows a phased strategy, starting with text-to-motion pre-training and then moving to multi-modal control adaptation, which keeps the model effective under a variety of conditioning signals.
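To make the "parallel dynamic/static topology" idea more concrete, the sketch below shows one plausible way such a block could be wired up in PyTorch: one attention branch runs over per-frame motion features (dynamic), another over static skeleton-topology embeddings, and a small conditioner projects text, speech, or music embeddings into a shared space. All module names, dimensions, and the fusion scheme here are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a ControlMM-Attn-style block (names and shapes assumed).
import torch
import torch.nn as nn


class ParallelTopologyAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Branch 1: temporal self-attention over the motion sequence (dynamic).
        self.dynamic_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Branch 2: self-attention over joints of the static skeleton topology.
        self.static_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, motion: torch.Tensor, skeleton: torch.Tensor) -> torch.Tensor:
        # motion:   (batch, frames, dim)  per-frame pose features
        # skeleton: (batch, joints, dim)  static joint/topology embeddings
        dyn, _ = self.dynamic_attn(motion, motion, motion)
        sta, _ = self.static_attn(skeleton, skeleton, skeleton)
        # Broadcast a pooled static summary to every frame before fusing.
        sta_summary = sta.mean(dim=1, keepdim=True).expand_as(dyn)
        fused = self.fuse(torch.cat([dyn, sta_summary], dim=-1))
        return self.norm(motion + fused)


class MultiModalConditioner(nn.Module):
    """Projects text / speech / music embeddings into one condition space
    (an assumed simplification of the multi-modal control adaptation step)."""

    def __init__(self, dim: int, text_dim: int, speech_dim: int, music_dim: int):
        super().__init__()
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, dim),
            "speech": nn.Linear(speech_dim, dim),
            "music": nn.Linear(music_dim, dim),
        })

    def forward(self, conditions: dict[str, torch.Tensor]) -> torch.Tensor:
        # Sum whichever modalities are present; missing ones are simply skipped.
        return torch.stack(
            [self.proj[name](feat) for name, feat in conditions.items()]
        ).sum(dim=0)


if __name__ == "__main__":
    block = ParallelTopologyAttention(dim=256)
    cond = MultiModalConditioner(dim=256, text_dim=512, speech_dim=128, music_dim=128)
    motion = torch.randn(2, 60, 256)     # 2 clips, 60 frames
    skeleton = torch.randn(2, 22, 256)   # 22 joints
    c = cond({"text": torch.randn(2, 512)})          # text-only condition
    out = block(motion + c.unsqueeze(1), skeleton)   # inject condition, then attend
    print(out.shape)                                 # torch.Size([2, 60, 256])
```

In this reading, the phased training described above would amount to first fitting the block with text-only conditions and then adapting the conditioner to speech and music inputs; that split is an interpretation of the article, not a documented training recipe.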